  • 1 улучшать (to improve)

    The discoveries scientists have made in bettering the flavour and texture of margarine...

    This further enhances the appearance of the apparatus.

    The equipment should gain in performance through the application of...

    Efforts to improve (upon) the characteristics...

    This makes for good circulation.

    Glass and mica will upgrade organic materials.

    Русско-английский научно-технический словарь переводчика (Russian-English scientific and technical translator's dictionary) > улучшать

  • 2 Artificial Intelligence

       In my opinion, none of [these programs] does even remote justice to the complexity of human mental processes. Unlike men, "artificially intelligent" programs tend to be single-minded, undistractable, and unemotional. (Neisser, 1967, p. 9)
       Future progress in [artificial intelligence] will depend on the development of both practical and theoretical knowledge.... As regards theoretical knowledge, some have sought a unified theory of artificial intelligence. My view is that artificial intelligence is (or soon will be) an engineering discipline since its primary goal is to build things. (Nilsson, 1971, pp. vii-viii)
       Most workers in AI [artificial intelligence] research and in related fields confess to a pronounced feeling of disappointment in what has been achieved in the last 25 years. Workers entered the field around 1950, and even around 1960, with high hopes that are very far from being realized in 1972. In no part of the field have the discoveries made so far produced the major impact that was then promised.... In the meantime, claims and predictions regarding the potential results of AI research had been publicized which went even farther than the expectations of the majority of workers in the field, whose embarrassments have been added to by the lamentable failure of such inflated predictions....
       When able and respected scientists write in letters to the present author that AI, the major goal of computing science, represents "another step in the general process of evolution"; that possibilities in the 1980s include an all-purpose intelligence on a human-scale knowledge base; that awe-inspiring possibilities suggest themselves based on machine intelligence exceeding human intelligence by the year 2000 [one has the right to be skeptical]. (Lighthill, 1972, p. 17)
       4) Just as Astronomy Succeeded Astrology, the Discovery of Intellectual Processes in Machines Should Lead to a Science, Eventually
       Just as astronomy succeeded astrology, following Kepler's discovery of planetary regularities, the discoveries of these many principles in empirical explorations on intellectual processes in machines should lead to a science, eventually. (Minsky & Papert, 1973, p. 11)
       Many problems arise in experiments on machine intelligence because things obvious to any person are not represented in any program. One can pull with a string, but one cannot push with one.... Simple facts like these caused serious problems when Charniak attempted to extend Bobrow's "Student" program to more realistic applications, and they have not been faced up to until now. (Minsky & Papert, 1973, p. 77)
       What do we mean by [a symbolic] "description"? We do not mean to suggest that our descriptions must be made of strings of ordinary language words (although they might be). The simplest kind of description is a structure in which some features of a situation are represented by single ("primitive") symbols, and relations between those features are represented by other symbols-or by other features of the way the description is put together. (Minsky & Papert, 1973, p. 11)
       [AI is] the use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular. (Boden, 1977, p. 5)
       The word you look for and hardly ever see in the early AI literature is the word knowledge. They didn't believe you have to know anything, you could always rework it all.... In fact 1967 is the turning point in my mind when there was enough feeling that the old ideas of general principles had to go.... I came up with an argument for what I called the primacy of expertise, and at the time I called the other guys the generalists. (Moses, quoted in McCorduck, 1979, pp. 228-229)
       9) Artificial Intelligence Is Psychology in a Particularly Pure and Abstract Form
       The basic idea of cognitive science is that intelligent beings are semantic engines-in other words, automatic formal systems with interpretations under which they consistently make sense. We can now see why this includes psychology and artificial intelligence on a more or less equal footing: people and intelligent computers (if and when there are any) turn out to be merely different manifestations of the same underlying phenomenon. Moreover, with universal hardware, any semantic engine can in principle be formally imitated by a computer if only the right program can be found. And that will guarantee semantic imitation as well, since (given the appropriate formal behavior) the semantics is "taking care of itself" anyway. Thus we also see why, from this perspective, artificial intelligence can be regarded as psychology in a particularly pure and abstract form. The same fundamental structures are under investigation, but in AI, all the relevant parameters are under direct experimental control (in the programming), without any messy physiology or ethics to get in the way. (Haugeland, 1981b, p. 31)
       There are many different kinds of reasoning one might imagine:
        Formal reasoning involves the syntactic manipulation of data structures to deduce new ones following prespecified rules of inference. Mathematical logic is the archetypical formal representation.
        Procedural reasoning uses simulation to answer questions and solve problems. When we use a program to answer What is the sum of 3 and 4? it uses, or "runs," a procedural model of arithmetic.
        Reasoning by analogy seems to be a very natural mode of thought for humans but, so far, difficult to accomplish in AI programs. The idea is that when you ask the question Can robins fly? the system might reason that "robins are like sparrows, and I know that sparrows can fly, so robins probably can fly."
        Generalization and abstraction are also natural reasoning processes for humans that are difficult to pin down well enough to implement in a program. If one knows that Robins have wings, that Sparrows have wings, and that Blue jays have wings, eventually one will believe that All birds have wings. This capability may be at the core of most human learning, but it has not yet become a useful technique in AI....
        Meta-level reasoning is demonstrated by the way one answers the question What is Paul Newman's telephone number? You might reason that "if I knew Paul Newman's number, I would know that I knew it, because it is a notable fact." This involves using "knowledge about what you know," in particular, about the extent of your knowledge and about the importance of certain facts. Recent research in psychology and AI indicates that meta-level reasoning may play a central role in human cognitive processing. (Barr & Feigenbaum, 1981, pp. 146-147)
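       The "robins probably can fly" and "all birds have wings" examples above describe default inheritance over a taxonomy. The short Python sketch below is an editorial illustration only, not part of the quotation or of the dictionary entry; every class name, relation, and default value in it is invented.

          def lookup(entity, attribute, isa, defaults):
              """Inherit an attribute along the is-a chain; the most specific value wins."""
              while entity is not None:
                  if attribute in defaults.get(entity, {}):
                      return defaults[entity][attribute]
                  entity = isa.get(entity)          # climb to the superclass
              return None                           # no stored default applies

          # Toy knowledge base (illustrative facts only).
          ISA = {"robin": "bird", "sparrow": "bird", "penguin": "bird", "bird": "animal"}
          DEFAULTS = {"bird": {"flies": True, "has_wings": True},
                      "penguin": {"flies": False}}  # an exception overrides the default

          print(lookup("robin", "flies", ISA, DEFAULTS))    # True, inherited from "bird"
          print(lookup("penguin", "flies", ISA, DEFAULTS))  # False, the exception wins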
       Suffice it to say that programs already exist that can do things-or, at the very least, appear to be beginning to do things-which ill-informed critics have asserted a priori to be impossible. Examples include: perceiving in a holistic as opposed to an atomistic way; using language creatively; translating sensibly from one language to another by way of a language-neutral semantic representation; planning acts in a broad and sketchy fashion, the details being decided only in execution; distinguishing between different species of emotional reaction according to the psychological context of the subject. (Boden, 1981, p. 33)
       Can the synthesis of Man and Machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens-and I have... good reasons for thinking that it must-we have nothing to regret and certainly nothing to fear. (Clarke, 1984, p. 243)
       The thesis of GOFAI... is not that the processes underlying intelligence can be described symbolically... but that they are symbolic. (Haugeland, 1985, p. 113)
        14) Artificial Intelligence Provides a Useful Approach to Psychological and Psychiatric Theory Formation
       It is all very well formulating psychological and psychiatric theories verbally but, when using natural language (even technical jargon), it is difficult to recognise when a theory is complete; oversights are all too easily made, gaps too readily left. This is a point which is generally recognised to be true and it is for precisely this reason that the behavioural sciences attempt to follow the natural sciences in using "classical" mathematics as a more rigorous descriptive language. However, it is an unfortunate fact that, with a few notable exceptions, there has been a marked lack of success in this application. It is my belief that a different approach-a different mathematics-is needed, and that AI provides just this approach. (Hand, quoted in Hand, 1985, pp. 6-7)
       We might distinguish among four kinds of AI.
       Research of this kind involves building and programming computers to perform tasks which, to paraphrase Marvin Minsky, would require intelligence if they were done by us. Researchers in nonpsychological AI make no claims whatsoever about the psychological realism of their programs or the devices they build, that is, about whether or not computers perform tasks as humans do.
       Research here is guided by the view that the computer is a useful tool in the study of mind. In particular, we can write computer programs or build devices that simulate alleged psychological processes in humans and then test our predictions about how the alleged processes work. We can weave these programs and devices together with other programs and devices that simulate different alleged mental processes and thereby test the degree to which the AI system as a whole simulates human mentality. According to weak psychological AI, working with computer models is a way of refining and testing hypotheses about processes that are allegedly realized in human minds.
    ... According to this view, our minds are computers and therefore can be duplicated by other computers. Sherry Turkle writes that the "real ambition is of mythic proportions, making a general purpose intelligence, a mind." (Turkle, 1984, p. 240) The authors of a major text announce that "the ultimate goal of AI research is to build a person or, more humbly, an animal." (Charniak & McDermott, 1985, p. 7)
       Research in this field, like strong psychological AI, takes seriously the functionalist view that mentality can be realized in many different types of physical devices. Suprapsychological AI, however, accuses strong psychological AI of being chauvinistic-of being only interested in human intelligence! Suprapsychological AI claims to be interested in all the conceivable ways intelligence can be realized. (Flanagan, 1991, pp. 241-242)
        16) Determination of Relevance of Rules in Particular Contexts
       Even if the [rules] were stored in a context-free form the computer still couldn't use them. To do that the computer requires rules enabling it to draw on just those [rules] which are relevant in each particular context. Determination of relevance will have to be based on further facts and rules, but the question will again arise as to which facts and rules are relevant for making each particular determination. One could always invoke further facts and rules to answer this question, but of course these must be only the relevant ones. And so it goes. It seems that AI workers will never be able to get started here unless they can settle the problem of relevance beforehand by cataloguing types of context and listing just those facts which are relevant in each. (Dreyfus & Dreyfus, 1986, p. 80)
       Perhaps the single most important idea to artificial intelligence is that there is no fundamental difference between form and content, that meaning can be captured in a set of symbols such as a semantic net. (G. Johnson, 1986, p. 250)
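       By way of illustration only (the nodes and relation labels below are invented, and nothing here is drawn from Johnson's text), a semantic net of the kind named in the quotation can be held as a plain list of symbol triples, with "meaning" residing entirely in how the symbols are linked:

          # A toy semantic net: (subject, relation, object) triples of bare symbols.
          EDGES = [
              ("canary", "isa", "bird"),
              ("bird", "isa", "animal"),
              ("bird", "has-part", "wings"),
              ("canary", "colour", "yellow"),
          ]

          def related(node, relation):
              """All nodes reachable from `node` by one link labelled `relation`."""
              return [obj for subj, rel, obj in EDGES if subj == node and rel == relation]

          print(related("bird", "has-part"))   # ['wings']
          print(related("canary", "isa"))      # ['bird']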
        18) The Assumption That the Mind Is a Formal System
       Artificial intelligence is based on the assumption that the mind can be described as some kind of formal system manipulating symbols that stand for things in the world. Thus it doesn't matter what the brain is made of, or what it uses for tokens in the great game of thinking. Using an equivalent set of tokens and rules, we can do thinking with a digital computer, just as we can play chess using cups, salt and pepper shakers, knives, forks, and spoons. Using the right software, one system (the mind) can be mapped into the other (the computer). (G. Johnson, 1986, p. 250)
        19) A Statement of the Primary and Secondary Purposes of Artificial Intelligence
       The primary goal of Artificial Intelligence is to make machines smarter.
       The secondary goals of Artificial Intelligence are to understand what intelligence is (the Nobel laureate purpose) and to make machines more useful (the entrepreneurial purpose). (Winston, 1987, p. 1)
       The theoretical ideas of older branches of engineering are captured in the language of mathematics. We contend that mathematical logic provides the basis for theory in AI. Although many computer scientists already count logic as fundamental to computer science in general, we put forward an even stronger form of the logic-is-important argument....
       AI deals mainly with the problem of representing and using declarative (as opposed to procedural) knowledge. Declarative knowledge is the kind that is expressed as sentences, and AI needs a language in which to state these sentences. Because the languages in which this knowledge usually is originally captured (natural languages such as English) are not suitable for computer representations, some other language with the appropriate properties must be used. It turns out, we think, that the appropriate properties include at least those that have been uppermost in the minds of logicians in their development of logical languages such as the predicate calculus. Thus, we think that any language for expressing knowledge in AI systems must be at least as expressive as the first-order predicate calculus. (Genesereth & Nilsson, 1987, p. viii)
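       As a hedged illustration of the declarative stance described above (not the authors' own formalism; the predicates, constants, and single-premise rule format are simplifications invented here), knowledge can be stored as ground sentences plus if-then rules, and new sentences derived by forward chaining:

          # Facts are ground atoms (Predicate, constant); rules say
          # "if Premise(x) then Conclusion(x)" for a single premise.
          FACTS = {("Bird", "tweety"), ("Bird", "sam")}
          RULES = [("Bird", "CanFly")]          # Bird(x) -> CanFly(x)

          def forward_chain(facts, rules):
              """Apply the rules until no new ground atoms can be derived."""
              derived, changed = set(facts), True
              while changed:
                  changed = False
                  for premise, conclusion in rules:
                      for pred, arg in list(derived):
                          if pred == premise and (conclusion, arg) not in derived:
                              derived.add((conclusion, arg))
                              changed = True
              return derived

          print(sorted(forward_chain(FACTS, RULES)))
          # [('Bird', 'sam'), ('Bird', 'tweety'), ('CanFly', 'sam'), ('CanFly', 'tweety')]

       Full first-order languages add variables, multi-premise rules, negation, and quantifiers; that extra expressiveness is precisely what the quotation argues a knowledge-representation language needs.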
        21) Perceptual Structures Can Be Represented as Lists of Elementary Propositions
       In artificial intelligence studies, perceptual structures are represented as assemblages of description lists, the elementary components of which are propositions asserting that certain relations hold among elements. (Chase & Simon, 1988, p. 490)
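       To make the "assemblage of description lists" idea concrete (an editorial sketch; the board fragment and relation names are invented, not taken from Chase and Simon), a perceived chess pattern can be stored as elementary propositions, each asserting that a relation holds among elements:

          # Elementary propositions: (relation, element, element) tuples.
          DESCRIPTION = [
              ("on", "white-knight", "f3"),
              ("on", "white-pawn", "d4"),
              ("defends", "white-knight", "white-pawn"),
              ("same-colour", "white-knight", "white-pawn"),
          ]

          def holds(relation, *elements):
              """Is the given elementary proposition part of the perceptual description?"""
              return (relation, *elements) in DESCRIPTION

          print(holds("defends", "white-knight", "white-pawn"))  # True
          print(holds("defends", "white-pawn", "white-knight"))  # False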
       Artificial intelligence (AI) is sometimes defined as the study of how to build and/or program computers to enable them to do the sorts of things that minds can do. Some of these things are commonly regarded as requiring intelligence: offering a medical diagnosis and/or prescription, giving legal or scientific advice, proving theorems in logic or mathematics. Others are not, because they can be done by all normal adults irrespective of educational background (and sometimes by non-human animals too), and typically involve no conscious control: seeing things in sunlight and shadows, finding a path through cluttered terrain, fitting pegs into holes, speaking one's own native tongue, and using one's common sense. Because it covers AI research dealing with both these classes of mental capacity, this definition is preferable to one describing AI as making computers do "things that would require intelligence if done by people." However, it presupposes that computers could do what minds can do, that they might really diagnose, advise, infer, and understand. One could avoid this problematic assumption (and also side-step questions about whether computers do things in the same way as we do) by defining AI instead as "the development of computers whose observable performance has features which in humans we would attribute to mental processes." This bland characterization would be acceptable to some AI workers, especially amongst those focusing on the production of technological tools for commercial purposes. But many others would favour a more controversial definition, seeing AI as the science of intelligence in general-or, more accurately, as the intellectual core of cognitive science. As such, its goal is to provide a systematic theory that can explain (and perhaps enable us to replicate) both the general categories of intentionality and the diverse psychological capacities grounded in them. (Boden, 1990b, pp. 1-2)
       Because the ability to store data somewhat corresponds to what we call memory in human beings, and because the ability to follow logical procedures somewhat corresponds to what we call reasoning in human beings, many members of the cult have concluded that what computers do somewhat corresponds to what we call thinking. It is no great difficulty to persuade the general public of that conclusion since computers process data very fast in small spaces well below the level of visibility; they do not look like other machines when they are at work. They seem to be running along as smoothly and silently as the brain does when it remembers and reasons and thinks. On the other hand, those who design and build computers know exactly how the machines are working down in the hidden depths of their semiconductors. Computers can be taken apart, scrutinized, and put back together. Their activities can be tracked, analyzed, measured, and thus clearly understood-which is far from possible with the brain. This gives rise to the tempting assumption on the part of the builders and designers that computers can tell us something about brains, indeed, that the computer can serve as a model of the mind, which then comes to be seen as some manner of information processing machine, and possibly not as good at the job as the machine. (Roszak, 1994, pp. xiv-xv)
       The inner workings of the human mind are far more intricate than the most complicated systems of modern technology. Researchers in the field of artificial intelligence have been attempting to develop programs that will enable computers to display intelligent behavior. Although this field has been an active one for more than thirty-five years and has had many notable successes, AI researchers still do not know how to create a program that matches human intelligence. No existing program can recall facts, solve problems, reason, learn, and process language with human facility. This lack of success has occurred not because computers are inferior to human brains but rather because we do not yet know in sufficient detail how intelligence is organized in the brain. (Anderson, 1995, p. 2)

    Historical dictionary of quotations in cognitive science > Artificial Intelligence

  • 3 Memory

       To what extent can we lump together what goes on when you try to recall: (1) your name; (2) how you kick a football; and (3) the present location of your car keys? If we use introspective evidence as a guide, the first seems an immediate automatic response. The second may require constructive internal replay prior to our being able to produce a verbal description. The third... quite likely involves complex operational responses under the control of some general strategy system. Is any unitary search process, with a single set of characteristics and input-output relations, likely to cover all these cases? (Reitman, 1970, p. 485)
       [Semantic memory] is a mental thesaurus, organized knowledge a person possesses about words and other verbal symbols, their meanings and referents, about relations among them, and about rules, formulas, and algorithms for the manipulation of these symbols, concepts, and relations. Semantic memory does not register perceptible properties of inputs, but rather cognitive referents of input signals. (Tulving, 1972, p. 386)
       The mnemonic code, far from being fixed and unchangeable, is structured and restructured along with general development. Such a restructuring of the code takes place in close dependence on the schemes of intelligence. The clearest indication of this is the observation of different types of memory organisation in accordance with the age level of a child so that a longer interval of retention without any new presentation, far from causing a deterioration of memory, may actually improve it. (Piaget & Inhelder, 1973, p. 36)
       4) The Logic of Some Memory Theorization Is of Dubious Worth in the History of Psychology
       If a cue was effective in memory retrieval, then one could infer it was encoded; if a cue was not effective, then it was not encoded. The logic of this theorization is "heads I win, tails you lose" and is of dubious worth in the history of psychology. We might ask how long scientists will puzzle over questions with no answers. (Solso, 1974, p. 28)
       We have iconic, echoic, active, working, acoustic, articulatory, primary, secondary, episodic, semantic, short-term, intermediate-term, and long-term memories, and these memories contain tags, traces, images, attributes, markers, concepts, cognitive maps, natural-language mediators, kernel sentences, relational rules, nodes, associations, propositions, higher-order memory units, and features. (Eysenck, 1977, p. 4)
       The problem with the memory metaphor is that storage and retrieval of traces only deals [sic] with old, previously articulated information. Memory traces can perhaps provide a basis for dealing with the "sameness" of the present experience with previous experiences, but the memory metaphor has no mechanisms for dealing with novel information. (Bransford, McCarrell, Franks & Nitsch, 1977, p. 434)
       7) The Results of a Hundred Years of the Psychological Study of Memory Are Somewhat Discouraging
       The results of a hundred years of the psychological study of memory are somewhat discouraging. We have established firm empirical generalisations, but most of them are so obvious that every ten-year-old knows them anyway. We have made discoveries, but they are only marginally about memory; in many cases we don't know what to do with them, and wear them out with endless experimental variations. We have an intellectually impressive group of theories, but history offers little confidence that they will provide any meaningful insight into natural behavior. (Neisser, 1978, pp. 12-13)
       A schema, then, is a data structure for representing the generic concepts stored in memory. There are schemata representing our knowledge about all concepts; those underlying objects, situations, events, sequences of events, actions and sequences of actions. A schema contains, as part of its specification, the network of interrelations that is believed to normally hold among the constituents of the concept in question. A schema theory embodies a prototype theory of meaning. That is, inasmuch as a schema underlying a concept stored in memory corresponds to the meaning of that concept, meanings are encoded in terms of the typical or normal situations or events that instantiate that concept. (Rumelhart, 1980, p. 34)
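       A minimal sketch of the schema-as-data-structure idea (the concept, slots, and default filler below are invented for illustration and are not Rumelhart's): a generic concept carries typical slot values that a particular instance can override, which is the prototype flavour the quotation describes.

          BUY_SCHEMA = {
              "concept": "buy",
              "slots": {"buyer": None, "seller": None, "goods": None,
                        "payment": "money"},   # typical (default) filler
          }

          def instantiate(schema, **fillers):
              """Fill a schema's slots, keeping defaults for anything left unspecified."""
              slots = dict(schema["slots"])
              slots.update(fillers)
              return {"concept": schema["concept"], "slots": slots}

          print(instantiate(BUY_SCHEMA, buyer="Ann", goods="book"))
          # {'concept': 'buy', 'slots': {'buyer': 'Ann', 'seller': None,
          #                              'goods': 'book', 'payment': 'money'}}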
       Memory appears to be constrained by a structure, a "syntax," perhaps at quite a low level, but it is free to be variable, deviant, even erratic at a higher level....
       Like the information system of language, memory can be explained in part by the abstract rules which underlie it, but only in part. The rules provide a basic competence, but they do not fully determine performance. (Campbell, 1982, pp. 228, 229)
       When people think about the mind, they often liken it to a physical space, with memories and ideas as objects contained within that space. Thus, we speak of ideas being in the dark corners or dim recesses of our minds, and of holding ideas in mind. Ideas may be in the front or back of our minds, or they may be difficult to grasp. With respect to the processes involved in memory, we talk about storing memories, of searching or looking for lost memories, and sometimes of finding them. An examination of common parlance, therefore, suggests that there is general adherence to what might be called the spatial metaphor. The basic assumptions of this metaphor are that memories are treated as objects stored in specific locations within the mind, and the retrieval process involves a search through the mind in order to find specific memories....
       However, while the spatial metaphor has shown extraordinary longevity, there have been some interesting changes over time in the precise form of analogy used. In particular, technological advances have influenced theoretical conceptualisations.... The original Greek analogies were based on wax tablets and aviaries; these were superseded by analogies involving switchboards, gramophones, tape recorders, libraries, conveyor belts, and underground maps. Most recently, the workings of human memory have been compared to computer functioning... and it has been suggested that the various memory stores found in computers have their counterparts in the human memory system. (Eysenck, 1984, pp. 79-80)
       Primary memory [as proposed by William James] relates to information that remains in consciousness after it has been perceived, and thus forms part of the psychological present, whereas secondary memory contains information about events that have left consciousness, and are therefore part of the psychological past. (Eysenck, 1984, p. 86)
       Once psychologists began to study long-term memory per se, they realized it may be divided into two main categories.... Semantic memories have to do with our general knowledge about the working of the world. We know what cars do, what stoves do, what the laws of gravity are, and so on. Episodic memories are largely events that took place at a time and place in our personal history. Remembering specific events about our own actions, about our family, and about our individual past falls into this category. With amnesia or in aging, what dims... is our personal episodic memories, save for those that are especially dear or painful to us. Our knowledge of how the world works remains pretty much intact. (Gazzaniga, 1988, p. 42)
       The nature of memory... provides a natural starting point for an analysis of thinking. Memory is the repository of many of the beliefs and representations that enter into thinking, and the retrievability of these representations can limit the quality of our thought. (Smith, 1990, p. 1)

    Historical dictionary of quotations in cognitive science > Memory

See also in other dictionaries:

  • The Urantia Book — …   Wikipedia

  • The Value of Science — a book by the French mathematician, physicist, and philosopher Henri Poincaré. It was published in 1905. The book deals with questions in the philosophy of science and adds detail to the topics addressed by Poincaré's previous book, Science… …   Wikipedia

  • The Year 3,000 — (Italian: L'Anno 3000) is a novel written by the Italian writer and physician Paolo Mantegazza in 1897. It is a short romance which follows the typical utopian forecasting of life and society in the future, which was common at the end of the 19th century …   Wikipedia

  • The Botanic Garden — (1791) is a set of two poems, The Economy of Vegetation and The Loves of the Plants, by the British poet and naturalist Erasmus Darwin. The Economy of Vegetation celebrates technological innovation, scientific discovery and offers theories… …   Wikipedia

  • Discoveries of human feet on British Columbia beaches, 2007–2008 — Since August 2007, five disarticulated (i.e. legless) human feet have been discovered in coastal British Columbia: one left foot and four right feet. A right foot was also discovered on a beach in Washington. As of August 2008, only one of the… …   Wikipedia

  • Genetics and the Book of Mormon — The Book of Mormon, one of the four books of scripture of The Church of Jesus Christ of Latter-day Saints (see Standard Works), is an account of three groups of people. Two of these groups originated from Israel. There is generally no support… …   Wikipedia

  • The Edge of Evolution — a book by Michael J. Behe, published by Free Press on June 5, 2007 (hardcover, with an audiobook on August 1, 2007; 336 pages; ISBN 0-743-29620-6). The Edge of …   Wikipedia

  • The Idler (1758–1760) — The Idler was a series of 103 essays, all but twelve of them by Samuel Johnson, published in the London weekly the… …   Wikipedia

  • The Age of Spiritual Machines — a book by Ray Kurzweil, full title The Age of Spiritual Machines: When Computers Exceed Human Intelligence… …   Wikipedia

  • The Mechanical Universe — a television series about physics, created by Dr. James F. Blinn and starring Dr. David Goodstein… …   Wikipedia

  • The arts — The arts are a vast subdivision of culture, composed of many… …   Wikipedia
